
feat: Write/retrieve chunks using postgres #17

Merged 5 commits from pgvector-embeddings into main on Jan 26, 2024
Conversation

bjchambers (Contributor):

This removes the dependency on Redis and makes chunk/embedding storage and retrieval work against the postgres database.

There are some issues to be addressed, specifically deduplicating cases where multiple embeddings of the same chunk are retrieved. I plan to work on those in a follow-up PR, so that we can get the bulk of this in first.

On dewy/common/collection_embeddings.py (outdated, resolved):
FROM relevant_embeddings
JOIN chunk
ON chunk.id = relevant_embeddings.chunk_id
LIMIT $2
Contributor:

It seems like we could invert this and use SELECT DISTINCT ... from chunk to get the deduplicated chunks.
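
Roughly (a sketch reusing the names from the snippet above; the chunk.text column is assumed):

    -- Sketch of the suggested inversion: DISTINCT collapses chunks that
    -- matched via more than one embedding. chunk.text is an assumed column.
    SELECT DISTINCT chunk.id, chunk.text
    FROM chunk
    JOIN relevant_embeddings
      ON relevant_embeddings.chunk_id = chunk.id
    LIMIT $2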

bjchambers (Author):

Maybe? I want to get something in first and then play with it. I'd like to be able to point a psql REPL at the database with chunks loaded in, and then see what works (and also use EXPLAIN to see what the query does, etc.). Deferring.
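
For example, from psql (hypothetical table and column names; the real retrieval query would go in place of the SELECT):

    -- EXPLAIN ANALYZE shows the actual plan, e.g. whether the pgvector
    -- index gets used. Names below are made up for illustration.
    EXPLAIN ANALYZE
    SELECT chunk_id
    FROM embedding
    ORDER BY embedding.vector <-> '[0.1, 0.2, 0.3]'
    LIMIT 10;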

On dewy/common/collection_embeddings.py (resolved):
url, extract_tables=self.extract_tables, extract_images=self.extract_images
)
if extracted.is_empty():
logger.error(
Contributor:

If this is an error, shouldn't it throw an exception?

bjchambers (Author):

It could -- but with background tasks, there isn't really anything to do with that error. What I think we actually need to do is mark the document (or the ingestion associated with the document) as failed, and/or add some kind of dead-letter handling. That said -- perhaps we shouldn't treat this as an error?


# Then, embed each of those chunks.
# We assume no chunks for the document existed before, so we can iterate
# over the chunks.
Contributor:

Can we not return the chunk IDs or something? This seems like an assumption that's going to cause bugs as soon as we support updating a document.

bjchambers (Author):

We could. My thinking was that we could write them into the DB rather than trying to keep them in memory and then read them back out. But I think that both llamaindex and various other embedding libraries will lead to the whole text having to fit in memory anyway during an ingest, so maybe it doesn't matter.

bjchambers (Author):

Actually, I take that back. There isn't a great way to do that. Specifically:

  1. This uses executemany, which doesn't return anything.
  2. If we use fetch, we can't provide a list of rows to insert -- it needs to be a single query.
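
(A single query could take an array parameter, though; a sketch, assuming an asyncpg connection `conn` and a chunk(document_id, text) table with a generated id column:)

    # Sketch only: batch-insert via a Postgres array parameter and read
    # the generated IDs back in one round trip. Table/column names and
    # the ::integer cast for the document key are assumptions.
    rows = await conn.fetch(
        """
        INSERT INTO chunk (document_id, text)
        SELECT $1::integer, unnest($2::text[])
        RETURNING id
        """,
        document_id,
        [chunk.text for chunk in chunks],
    )
    chunk_ids = [row["id"] for row in rows]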

I think I'll leave as is for this PR. I think we could handle update in a variety of ways:

  1. Introduce a new document ID and delete the old one.
  2. Add a "version" to each chunk, and query for only the chunks related to the current version.
  3. etc.

Contributor:

Yeah, including an "ingest version" or something that we could filter on at query time would work.
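
Something like (a sketch; the version columns are made up):

    -- Stamp chunks with the ingest version that produced them, and only
    -- read the document's current version at query time.
    SELECT chunk.id, chunk.text
    FROM chunk
    JOIN document ON document.id = chunk.document_id
    WHERE chunk.ingest_version = document.current_ingest_version;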

bjchambers merged commit cda4f8b into main on Jan 26, 2024
bjchambers deleted the pgvector-embeddings branch on January 26, 2024, 22:34
bjchambers added the enhancement (New feature or request) label on Jan 31, 2024